Communication Compression for Distributed Nonconvex Optimization

Authors

Abstract

This paper considers distributed nonconvex optimization with the cost functions being distributed over agents. Noting that information compression is a key tool to reduce the heavy communication load for distributed algorithms, as agents iteratively communicate with their neighbors, we propose three distributed primal–dual algorithms with compressed communication. The first two algorithms are applicable to a general class of compressors with bounded relative compression error, and the third algorithm is suitable for two general classes of compressors with bounded absolute compression error. We show that the proposed distributed algorithms with compressed communication have comparable convergence properties to state-of-the-art algorithms with exact communication. Specifically, they can find first-order stationary points with sublinear convergence rate $\mathcal{O}(1/T)$ when each local cost function is smooth, where $T$ is the total number of iterations, and find global optima with linear convergence rate under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz condition. Numerical simulations are provided to illustrate the effectiveness of the theoretical results.
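The abstract's first two algorithms assume a compressor with bounded relative error, i.e., an operator $C$ satisfying $\|C(x)-x\|^2 \le (1-\delta)\|x\|^2$ for some $\delta \in (0,1]$. As a minimal illustration of that class (not the paper's specific construction), a standard top-$k$ sparsifier satisfies this bound with $\delta = k/d$:

```python
import numpy as np

def top_k(x, k):
    """Top-k sparsification: keep only the k largest-magnitude entries.

    A standard compressor with bounded relative error:
        ||top_k(x) - x||^2 <= (1 - k/d) * ||x||^2   for x in R^d.
    """
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

# Example: compress a 4-dimensional vector down to its 2 largest entries.
x = np.array([3.0, -1.0, 0.5, 4.0])
c = top_k(x, 2)                      # -> [3.0, 0.0, 0.0, 4.0]

# Verify the relative-error bound with delta = k/d = 1/2.
err = np.linalg.norm(c - x) ** 2
bound = (1 - 2 / 4) * np.linalg.norm(x) ** 2
assert err <= bound
```

In a distributed algorithm of this kind, each agent would transmit `top_k` of its local update (or of a difference term) to its neighbors instead of the full vector, trading per-iteration communication for the compression error that the convergence analysis must absorb.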


Similar articles

Nonconvex Optimization for Communication Systems

Convex optimization has provided both a powerful tool and an intriguing mentality to the analysis and design of communication systems over the last few years. A main challenge today is on nonconvex problems in these applications. This paper presents an overview of some of the important nonconvex optimization problems in point-to-point and networked communication systems. Three typical applications…

Full text

Nonconvex Optimization for Communication Networks

Nonlinear convex optimization has provided both an insightful modeling language and a powerful solution tool to the analysis and design of communication systems over the last decade. A main challenge today is on nonconvex problems in these applications. This chapter presents an overview of some of the important nonconvex optimization problems in communication networks. Four typical applications…

Full text

Parallel and Distributed Methods for Nonconvex Optimization-Part I: Theory

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization problem is established. Our framework is very general…

Full text

An Augmented Lagrangian Based Algorithm for Distributed NonConvex Optimization

This paper is about distributed derivative-based algorithms for solving optimization problems with a separable (potentially nonconvex) objective function and coupled affine constraints. A parallelizable method is proposed that combines ideas from the fields of sequential quadratic programming and augmented Lagrangian algorithms. The method negotiates shared dual variables that may be interprete...

Full text

Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization

Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.

Full text


Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2022

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2022.3225515